Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages, which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, make sure the Python code is successfully imported and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [64]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

ROOT_DIR = "/home/ubuntu/carnd/datasets/traffic-signs-data/"

training_file = ROOT_DIR + "train.p"
validation_file = ROOT_DIR + "valid.p"
testing_file = ROOT_DIR + "test.p"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']


assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))

print("Train count {}".format(len(X_train)))
print("Valid count {}".format(len(X_valid)))
print("Test count {}".format(len(X_test)))
Train count 34799
Valid count 4410
Test count 12630

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
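As a quick sanity check, the structure described above can be inspected directly; a minimal sketch using the train dictionary loaded in Step 0:

# Inspect the pickled dictionary structure (uses `train` from the loading cell above).
print(sorted(train.keys()))     # expected: ['coords', 'features', 'labels', 'sizes']
print(train['features'].shape)  # (num examples, 32, 32, 3)
print(train['labels'].shape)    # (num examples,)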

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the numpy/pandas shape attribute might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [65]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

import numpy as np

# TODO: Number of training examples
n_train = len(X_train)

# TODO: Number of validation and testing examples.
n_valid = len(X_valid)
n_test = len(X_test)

# TODO: What's the shape of a traffic sign image?
image_shape = X_train[0].shape
print("Image shape {}".format(image_shape))

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(np.unique(y_train))

print("Number of training examples =", n_train)
print("Number of validation examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Image shape (32, 32, 3)
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [66]:
import random
import matplotlib.pyplot as plt

num_samples = 10
img_height, img_width = image_shape[0], image_shape[1]

def draw_samples(X, y, cls, num_cl, colormap=None):
    # Plot num_samples randomly chosen examples of class `cls` on one subplot row.
    c_ids = np.where(y == cls)
    selected_idx = random.sample(c_ids[0].tolist(), num_samples)
    for i, idx in enumerate(selected_idx):
        plt.subplot(num_cl, num_samples, (num_samples * cls) + i + 1)
        plt.axis('off')
        plt.imshow(X[idx], colormap)
In [6]:
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

plt.figure(figsize=(num_samples, img_width))
for c in range(n_classes):
    draw_samples(X_train, y_train, c, n_classes)
In [8]:
# check data statistics

def get_stats(labels):
    stats = np.zeros(n_classes)
    for e in labels:
        stats[e] += 1
    return stats
def plot_stats(stats, title):
    plt.figure()
    x = range(n_classes)
    plt.title(title)
    plt.bar(x, stats)
    

train_stats = get_stats(y_train)
val_stats = get_stats(y_valid)
test_stats = get_stats(y_test)

plot_stats(train_stats, "Training samples/class")
plot_stats(val_stats, "Validation samples/class")
plot_stats(test_stats, "Testing samples/class")
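As an aside, the counting loop in get_stats is equivalent to a single numpy call; a minimal alternative sketch:

# Count occurrences of each class id 0..n_classes-1 in one call.
train_stats = np.bincount(y_train, minlength=n_classes)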

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

With the LeNet-5 solution from the lecture, you should expect a validation set accuracy of about 0.89. To meet specifications, the validation set accuracy will need to be at least 0.93. It is possible to get an even higher accuracy, but 0.93 is the minimum for a successful project submission.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture (is the network over or underfitting?)
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. You don't need to be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

In [22]:
# Some classes can be horizontally flipped,
# others can be vertically flipped,
# others both horizontally and vertically,
# and others change meaning when flipped horizontally.
flip_horiz_c = [11, 12, 13, 15, 17, 18, 22, 26, 30, 35]
flip_vert_c = [1, 5, 12, 15, 17]
flip_both_c = [32, 40]
cross_flip_c = [[19, 20], [33, 34], [36, 37], [38, 39], [20, 19],  [34, 33],[37, 36], [39, 38]]

HFLIP, VFLIP, BOTH_FLIP, CROSS_FLIP = 0,1,2,3

# function to flip a dataset on a particular axis
def flip_dataset(X, y, classes_to_flip, flip_axis):
    X_flip = np.empty([0, X.shape[1], X.shape[2], X.shape[3]], dtype=np.uint8)
    y_flip = np.empty([0], dtype=np.int32)
    for c in classes_to_flip:
        if flip_axis == CROSS_FLIP:
            src_c = c[0]
            dest_c = c[1]
            count = (y==src_c).sum()
            X_flip = np.append(X_flip, X[y==src_c][:, :, ::-1, :], axis=0)
            y_flip = np.append(y_flip, np.full(count, dest_c, dtype=np.int32))
            continue
            
        count = (y==c).sum()
        if flip_axis == HFLIP:
            X_flip = np.append(X_flip, X[y==c][:, :, ::-1, :], axis=0)
        elif flip_axis == VFLIP:
            X_flip = np.append(X_flip, X[y==c][:, ::-1, :, :], axis=0)
        elif flip_axis == BOTH_FLIP:
            X_flip = np.append(X_flip, X[y==c][:, ::-1, ::-1, :], axis=0)

        y_flip = np.append(y_flip, np.full(count, c, dtype=np.int32))

    return X_flip, y_flip
In [23]:
X_hflip, y_hflip = flip_dataset(X_train, y_train, flip_horiz_c, HFLIP)
X_vflip, y_vflip = flip_dataset(X_train, y_train, flip_vert_c, VFLIP)
X_hvflip, y_hvflip = flip_dataset(X_train, y_train, flip_both_c, BOTH_FLIP)
X_cflip, y_cflip = flip_dataset(X_train, y_train, cross_flip_c, CROSS_FLIP)


X_train_flip = np.concatenate((X_hflip, X_vflip, X_hvflip, X_cflip), axis=0)
y_train_flip = np.concatenate((y_hflip, y_vflip, y_hvflip, y_cflip), axis=0)
flip_c = flip_horiz_c + flip_vert_c + flip_both_c + [c[1] for c in cross_flip_c]

    

print(X_train_flip.shape)
print(y_train_flip.shape)
(21569, 32, 32, 3)
(21569,)
In [10]:
plt.figure(figsize=(num_samples, img_width))
flip_c = sorted(flip_c)

for c in flip_c:
        draw_samples(X_train_flip, y_train_flip, c, num_cl=n_classes)
#     draw_samples(X_train_flip[:,:,:,0], y_train_flip, c, num_cl=n_classes)
In [24]:
# augment the training set with flipped images
X_train_all = np.concatenate((X_train, X_train_flip), axis=0)
y_train_all = np.concatenate((y_train, y_train_flip), axis=0)


train_stats = get_stats(y_train)
train_flip_stats =  get_stats(y_train_flip)
train_all_stats = get_stats(y_train_all)
print(y_train_all.shape)

# plt.figure()
plot_stats(train_stats, "Training samples/class")
plot_stats(train_flip_stats, "Training flipped samples/class")
plot_stats(train_all_stats, "Augmented samples/class")
(56368,)
In [37]:
# augment the dataset
# rotate the images with a small angle (-/+angle)

image_shape = X_train[0].shape
rows = image_shape[0]
cols = image_shape[1]

import cv2


max_count = max(train_all_stats)
print("Max no img / class {}".format(max_count))
COUNT_THRESH = 1500 # images


def rotate_images(X, class_count, c):
    count = class_count * 2
    rot = 1
    if class_count < 700:
        count *= 3 # 3 rotation angles, each applied as -alpha and +alpha
        rot = 3
    elif class_count < 1200:
        count *= 2 # 2 rotation angles (-alpha/+alpha)
        rot = 2
    elif class_count < 1500:
        count *= 1 # 1 rotation angle (-alpha/+alpha)
        rot = 1
            
            
    X_rotated = np.empty([count, X.shape[1], X.shape[2], X.shape[3]], dtype=np.uint8)
    i = 0
    for x in X:
#         alpha = random.uniform(-A_RANGE, A_RANGE)
        alpha = 5
        for j in range(rot):
            M = cv2.getRotationMatrix2D((cols/2,rows/2),alpha,1)
            dst = cv2.warpAffine(x,M,(cols,rows))
            X_rotated[i] = dst
            i += 1
            
            M = cv2.getRotationMatrix2D((cols/2,rows/2), -alpha, 1)
            dst = cv2.warpAffine(x,M,(cols,rows))
            X_rotated[i] = dst
            i += 1
            
            alpha += 5
    
    
    y_rotated = np.full(count, c, dtype=np.int32)

    return X_rotated, y_rotated

rot_classes = []
X_rotated = np.empty([0, X_train.shape[1], X_train.shape[2], X_train.shape[3]], dtype=np.uint8)
y_rotated = np.empty([0], dtype=np.int32)

for c in range(n_classes):
    count = (y_train_all==c).sum()
    if count < COUNT_THRESH:
        X_c = X_train_all[y_train_all==c]
        X_rot, y_rot = rotate_images(X_c, count, c)
        X_rotated = np.append(X_rotated, X_rot, axis=0)
        y_rotated = np.append(y_rotated, y_rot, axis=0)
        rot_classes.append(c)

    
Max no img / class 5670.0
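The commented-out random.uniform line above hints at an alternative scheme: sampling a random angle per image instead of fixed +/-5 degree steps. A minimal sketch; A_RANGE is an arbitrary illustrative choice, not a tuned value:

A_RANGE = 15 # degrees

def rotate_random(img):
    # Rotate by a random angle in [-A_RANGE, A_RANGE] around the image center.
    alpha = random.uniform(-A_RANGE, A_RANGE)
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), alpha, 1)
    return cv2.warpAffine(img, M, (cols, rows))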
In [19]:
# draw samples of rotated images
print(X_rotated.shape)
print(y_rotated.shape)
plt.figure(figsize=(num_samples, img_width))
for c in rot_classes:
    draw_samples(X_rotated, y_rotated, c, num_cl=n_classes)
#      draw_samples(X_Y_val[:,:,:,0], y_valid, c, num_cl=n_classes, colormap='gray')
(77572, 32, 32, 3)
(77572,)
In [38]:
# augment the training set with rotated images


X_train_rotated = np.concatenate((X_train_all, X_rotated), axis=0)
y_train_rotated = np.concatenate((y_train_all, y_rotated), axis=0)

train_before_stats =  get_stats(y_train_all)
train_rot_stats =  get_stats(y_rotated)
train_all_rot_stats = get_stats(y_train_rotated)
print(y_train_rotated.shape)

# plt.figure()
plot_stats(train_rot_stats, "Training rotated samples/class")
plot_stats(train_before_stats, "Training samples/class before rotation")
plot_stats(train_all_rot_stats, "Augmented samples/class")
(133940,)
In [46]:
from sklearn.utils import shuffle
# X_train = X_train_rotated
# y_train = y_train_rotated

X_train_rotated, y_train_rotated = shuffle(X_train_rotated, y_train_rotated)
# X_train_all, y_train_all = shuffle(X_train_all, y_train_all)
print(X_train_rotated.shape)
print(y_train_rotated.shape)


train_all_stats = get_stats(y_train_rotated)

# plt.figure()
plot_stats(train_all_stats, "Training all samples/class")
(133940, 32, 32, 3)
(133940,)
In [144]:
plt.figure(figsize=(num_samples, img_width))

for c in range(n_classes):
        draw_samples(X_valid, y_valid, c, num_cl=n_classes)
In [47]:
# Convert an RGB image to YUV
# and apply luminance normalization
from skimage import exposure


def convert_yuv(rgb): 
    yuv = cv2.cvtColor(rgb, cv2.COLOR_RGB2YUV)
    
    #  luminance normalization
    Y = np.reshape(exposure.equalize_adapthist(yuv[:,:,0]), (32, 32, 1))
    UV = np.reshape(yuv[:,:,1:], (32, 32, 2))

    return Y, UV

def convert_imgs(X):
    n_img = X.shape[0]
    Y = np.ndarray(shape=(n_img, 32, 32, 1))
    UV = np.ndarray(shape=(n_img, 32, 32, 2))
    
    for i in range(n_img):
        Y[i], UV[i] = convert_yuv(X[i])
        if i % 1000 == 0:
            print("{}%".format(i * 100.0 / n_img))
            
        
    return Y, UV

X_Y_train, X_UV_train = convert_imgs(X_train_rotated)
X_Y_val, X_UV_val = convert_imgs(X_valid)
X_Y_test, X_UV_test = convert_imgs(X_test)
0.0%
...
99.29819322084515%
(per-1000-image progress output for the training, validation and test set conversions elided)
In [48]:
import pickle
YUV_dataset = {}
YUV_dataset['train'] = (X_Y_train, X_UV_train, y_train_rotated)
YUV_dataset['valid'] = (X_Y_val, X_UV_val, y_valid)
YUV_dataset['test'] = (X_Y_test, X_UV_test, y_test)

with open("yuv_dataset.pkl", "wb") as f:
    pickle.dump(YUV_dataset, f, pickle.HIGHEST_PROTOCOL)
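A minimal sketch for reloading the cached dataset in a later session, assuming the pickle written above:

# Reload the preprocessed YUV dataset instead of re-running the conversion.
with open("yuv_dataset.pkl", "rb") as f:
    YUV_dataset = pickle.load(f)
X_Y_train, X_UV_train, y_train_rotated = YUV_dataset['train']
X_Y_val, X_UV_val, y_valid = YUV_dataset['valid']
X_Y_test, X_UV_test, y_test = YUV_dataset['test']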
In [49]:
# show some converted Y samples
plt.figure(figsize=(num_samples, img_width))
for c in range(n_classes):
    draw_samples(X_Y_train[:,:,:,0], y_train_rotated, c, num_cl=n_classes, colormap='gray')

Model Architecture

In [61]:
from sklearn.utils import shuffle

# X_train, y_train = shuffle(X_train, y_train)
# X_train_all, y_train_all = shuffle(X_train_all, y_train_all)
# X_train_all, y_train_all = shuffle(X_train_all, y_train_all)
# X_train_rotated, y_train_rotated = shuffle(X_train_rotated, y_train_rotated)

# normalize U and V channels to [0, 1]; the Y channel is already in [0, 1] after equalize_adapthist

X_UV_train /= 255.0
X_UV_val /= 255.0
X_UV_test /= 255.0


X_Y_train, X_UV_train, y_train_rotated = shuffle(X_Y_train, X_UV_train, y_train_rotated)
In [208]:
import tensorflow as tf

EPOCHS = 10
BATCH_SIZE = 128


tf.reset_default_graph()


from tensorflow.contrib.layers import flatten

def LeNet(x):    
    # Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
    mu = 0
    sigma = 0.1
    
    # SOLUTION: Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
    conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 3, 6), mean = mu, stddev = sigma))
    conv1_b = tf.Variable(tf.zeros(6))
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # SOLUTION: Activation.
    conv1 = tf.nn.relu(conv1)

    # SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
    conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
    conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
    conv2_b = tf.Variable(tf.zeros(16))
    conv2   = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # SOLUTION: Activation.
    conv2 = tf.nn.relu(conv2)

    # SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
    conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # SOLUTION: Flatten. Input = 5x5x16. Output = 400.
    fc0   = flatten(conv2)
    
    # SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
    fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
    fc1_b = tf.Variable(tf.zeros(120))
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # SOLUTION: Activation.
    fc1    = tf.nn.relu(fc1)

    # SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
    fc2_W  = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
    fc2_b  = tf.Variable(tf.zeros(84))
    fc2    = tf.matmul(fc1, fc2_W) + fc2_b
    
    # SOLUTION: Activation.
    fc2    = tf.nn.relu(fc2)

    # SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 43.
    fc3_W  = tf.Variable(tf.truncated_normal(shape=(84, n_classes), mean = mu, stddev = sigma))
    fc3_b  = tf.Variable(tf.zeros(n_classes))
    logits = tf.matmul(fc2, fc3_W) + fc3_b
    
    return logits
In [209]:
import tensorflow as tf


tf.reset_default_graph()

EPOCHS = 30
BATCH_SIZE = 512


from tensorflow.contrib.layers import flatten
from tensorflow.contrib.layers import xavier_initializer


def ConvNet(x):    
    
    exposed = {}
    # Layer 1: Convolutional. Input = 32x32x1 (Y channel). Output = 28x28x32.
    conv1_W = tf.get_variable("conv1_W", shape=(5, 5, 1, 32), initializer=xavier_initializer())
    conv1_b = tf.Variable(tf.zeros(32), name="conv1_b")
    conv1   = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b

    # Layer 1 Activation.
    conv1 = tf.nn.relu(conv1)
    exposed['conv1'] = conv1

    # Pooling. Input = 28x28x32. Output = 14x14x32.
    conv1_pool = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
    
    # Layer 2: Convolutional. Input = 14x14x32. Output = 10x10x64.
    conv2_W = tf.get_variable("conv2_W", shape=(5, 5, 32, 64), initializer=xavier_initializer())
    conv2_b = tf.Variable(tf.zeros(64), name="conv2_b")
    conv2   = tf.nn.conv2d(conv1_pool, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
    
    # Layer 2 Activation.
    conv2 = tf.nn.relu(conv2)
    exposed['conv2'] = conv2

    # Pooling. Input = 10x10x64. Output = 5x5x64.
    conv2_pool = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')

    # Flatten. Input = 14x14x32. Output = 6272.
    fc0_1   = flatten(conv1_pool)
    
    # Flatten. Input = 5x5x64. Output = 1600.
    fc0_2   = flatten(conv2_pool)
        
    # Concat the two flattened feature maps (multi-scale features). Output = 7872.
    fc0 = tf.concat([fc0_1, fc0_2], 1)  # TF >= 1.0 argument order: (values, axis)
    
    
    # Layer 3: Fully Connected. Input = 7872. Output = 100.
    fc1_W = tf.get_variable("fc1_W", shape=(7872, 100), initializer=xavier_initializer())
    fc1_b = tf.Variable(tf.zeros(100), name="fc1_b")
    fc1   = tf.matmul(fc0, fc1_W) + fc1_b
    
    # Activation.
    fc1 = tf.nn.relu(fc1)
    
    # Apply Dropout
    fc1 = tf.nn.dropout(fc1, keep_prob=keep_prob)
    
    # Layer 4: Fully Connected. Input = 100. Output = 100.
    fc2_W = tf.get_variable("fc2_W", shape=(100, 100), initializer=xavier_initializer())
    fc2_b = tf.Variable(tf.zeros(100), name="fc2_b")
    fc2   = tf.matmul(fc1, fc2_W) + fc2_b
    
     # Activation.
    fc2 = tf.nn.relu(fc2)
    
    # Apply Dropout
    fc2 = tf.nn.dropout(fc2, keep_prob=keep_prob)
    

    # Layer 5: Fully Connected. Input = 100. Output = 43.
    fc3_W = tf.get_variable("fc3_W", shape=(100, 43), initializer=xavier_initializer())
    fc3_b = tf.Variable(tf.zeros(43), name="fc3_b")
    logits   = tf.matmul(fc2, fc3_W) + fc3_b
    
    return logits, exposed

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on both the training and validation sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.

In [211]:
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, n_classes)
keep_prob = tf.placeholder(tf.float32)
lr = tf.placeholder(tf.float32)
In [212]:
# logits = LeNet(x)
logits, exposed = ConvNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = lr)
training_operation = optimizer.minimize(loss_operation)
In [213]:
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
test_operation = tf.argmax(logits, 1)
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
In [58]:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_Y_train)
    
    print("Training...")
    print()
    for i in range(EPOCHS):
        X_Y_train, X_UV_train, y_train_rotated = shuffle(X_Y_train, X_UV_train, y_train_rotated)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_Y_train[offset:end], y_train_rotated[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y,  lr: 0.001, keep_prob: 0.75})
            
#         validation_accuracy = evaluate(X_valid, y_valid)
        validation_accuracy = evaluate(X_Y_val, y_valid)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()
        
    saver.save(sess, './convet_y')
    print("Model saved")
Training...

EPOCH 1 ...
Validation Accuracy = 0.893

EPOCH 2 ...
Validation Accuracy = 0.933

EPOCH 3 ...
Validation Accuracy = 0.960

EPOCH 4 ...
Validation Accuracy = 0.960

EPOCH 5 ...
Validation Accuracy = 0.966

EPOCH 6 ...
Validation Accuracy = 0.966

EPOCH 7 ...
Validation Accuracy = 0.969

EPOCH 8 ...
Validation Accuracy = 0.970

EPOCH 9 ...
Validation Accuracy = 0.967

EPOCH 10 ...
Validation Accuracy = 0.973

EPOCH 11 ...
Validation Accuracy = 0.975

EPOCH 12 ...
Validation Accuracy = 0.972

EPOCH 13 ...
Validation Accuracy = 0.976

EPOCH 14 ...
Validation Accuracy = 0.977

EPOCH 15 ...
Validation Accuracy = 0.980

EPOCH 16 ...
Validation Accuracy = 0.975

EPOCH 17 ...
Validation Accuracy = 0.980

EPOCH 18 ...
Validation Accuracy = 0.980

EPOCH 19 ...
Validation Accuracy = 0.980

EPOCH 20 ...
Validation Accuracy = 0.976

EPOCH 21 ...
Validation Accuracy = 0.982

EPOCH 22 ...
Validation Accuracy = 0.980

EPOCH 23 ...
Validation Accuracy = 0.977

EPOCH 24 ...
Validation Accuracy = 0.982

EPOCH 25 ...
Validation Accuracy = 0.980

EPOCH 26 ...
Validation Accuracy = 0.980

EPOCH 27 ...
Validation Accuracy = 0.978

EPOCH 28 ...
Validation Accuracy = 0.976

EPOCH 29 ...
Validation Accuracy = 0.978

EPOCH 30 ...
Validation Accuracy = 0.978

Model saved
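To check the under/overfitting behaviour mentioned earlier, the training accuracy can be compared against the validation accuracy with the same evaluate helper. A minimal sketch; the 10000-sample cap is an arbitrary choice to keep the check fast:

with tf.Session() as sess:
    saver.restore(sess, "./convet_y")
    # Both accuracies low -> underfitting; train high but valid low -> overfitting.
    train_accuracy = evaluate(X_Y_train[:10000], y_train_rotated[:10000])
    valid_accuracy = evaluate(X_Y_val, y_valid)
    print("Train Accuracy = {:.3f}".format(train_accuracy))
    print("Valid Accuracy = {:.3f}".format(valid_accuracy))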
In [269]:
with tf.Session() as sess:
#     saver.restore(sess, tf.train.latest_checkpoint('.'))
    saver.restore(sess, "./convet_y")

    test_accuracy = evaluate(X_Y_test, y_test)
    print("Test Accuracy = {:.3f}".format(test_accuracy))
    
Test Accuracy = 0.969

Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.
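For convenience, the mapping can be loaded into a dict; a minimal sketch, assuming signnames.csv has the standard ClassId/SignName columns and sits next to the notebook:

import pandas as pd

sign_names = pd.read_csv("signnames.csv")
# Map class id (int) -> human-readable sign name.
id_to_name = dict(zip(sign_names['ClassId'], sign_names['SignName']))
print(id_to_name[14])  # Stop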

Load and Output the Images

In [274]:
import os
test_dir = "imgs_small"
imgs = []
files = os.listdir(test_dir)
total = len(files)
plt.figure(figsize=(15,15))
for idx, e in enumerate(files):
    img = cv2.imread(os.path.join(test_dir, e))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    plt.subplot(1, total, idx + 1)
    plt.axis('off')
#     img = cv2.resize(img,(32,32), cv2.INTER_LINEAR)
    plt.imshow(img)
    imgs.append(img)
    
    
In [275]:
# YUV conversion
X_imgs = np.asarray(imgs)
imgs_Y, imgs_UV = convert_imgs(X_imgs)

plt.figure(figsize=(15,15))
for idx, e in enumerate(imgs_Y):
    plt.subplot(1, total, idx + 1)
    plt.axis('off')
    plt.imshow(e[:,:,0], cmap='gray')
0.0%

Predict the Sign Type for Each Image

In [276]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
def predict(X):
    with tf.Session() as sess:
        saver.restore(sess, "./convet_y")

#         test_accuracy = evaluate(X_Y_test, y_test)
#         print("Test Accuracy = {:.3f}".format(test_accuracy))

        preds = sess.run(test_operation, feed_dict={x: X, keep_prob: 1.0})
        return preds

Analyze Performance

In [277]:
predictions = predict(imgs_Y)

import pandas
class_names = pandas.read_csv("./signnames.csv")
print(class_names['SignName'][predictions])

print("Total tests {}".format(total))
5      Speed limit (80km/h)
34          Turn left ahead
14                     Stop
5      Speed limit (80km/h)
17                 No entry
3      Speed limit (60km/h)
12            Priority road
25                Road work
7     Speed limit (100km/h)
35               Ahead only
35               Ahead only
13                    Yield
5      Speed limit (80km/h)
3      Speed limit (60km/h)
8     Speed limit (120km/h)
5      Speed limit (80km/h)
Name: SignName, dtype: object
Total tests 16

Calculate the accuracy for these new images.

For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

There are 16 test images. Six of them have no corresponding class in the German traffic signs dataset. A running tally of correct predictions is kept below for the images that belong to trained classes.

  1. No passing for vehicles over 6 tons. Predicted Speed limit (80km/h), another restriction sign; reasonable, since the CNN was not trained on this class.
  2. Turn left ahead. Predicted Turn left ahead. 1/1 correct
  3. Stop. Predicted Stop. 2/2 correct
  4. Stop. Predicted Speed limit (80km/h); the image has poor resolution. 2/3 correct
  5. No entry. Predicted No entry. 3/4 correct
  6. Speed limit (60km/h). Predicted Speed limit (60km/h). 4/5 correct
  7. Priority road. Predicted Priority road. 5/6 correct
  8. Pedestrian crossing. Predicted Road work. The network was not trained on this class.
  9. Speed limit (130km/h). Predicted Speed limit (100km/h). The network was not trained on 130km/h, but it predicted a visually similar speed limit.
  10. Road work. Predicted Ahead only. 5/7 correct
  11. Pedestrian lane. Predicted Ahead only. The network was not trained on this class.
  12. Yield. Predicted Yield. 6/8 correct
  13. Speed limit (80km/h). Predicted Speed limit (80km/h). 7/9 correct
  14. Speed limit (60km/h). Predicted Speed limit (60km/h). 8/10 correct
  15. No passing for vehicles over 12 tons. Predicted Speed limit (120km/h). Interestingly, the network predicted a sign category that closely resembles the 12t prohibition sign. It was not trained on this class.
  16. Speed limit (130km/h). Predicted Speed limit (80km/h). Again not trained on this class, but it still predicted a speed limit.

The accuracy for trained and recognizable traffic signs is 8/10 = 80%.
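The same tally can be computed programmatically; a minimal sketch, assuming a hand-labelled ground-truth array y_web (hypothetical, in prediction order) where -1 marks signs with no class in the training set:

# Hypothetical ground-truth labels for the 16 web images, in prediction order;
# -1 marks signs whose class does not exist in the GTSRB label set.
y_web = np.array([-1, 34, 14, 14, 17, 3, 12, -1, -1, 25, -1, 13, 5, 3, -1, -1])

trained = y_web != -1
web_accuracy = (predictions[trained] == y_web[trained]).mean()
print("Accuracy on trained classes = {:.0%}".format(web_accuracy))  # 8/10 = 80%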

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

In [278]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
### Feel free to use as many code cells as needed.

def get_top_k(X, k):
    
    with tf.Session() as sess:
        saver.restore(sess, tf.train.latest_checkpoint('.'))
        prob = sess.run(tf.nn.top_k(tf.nn.softmax(logits), k=k),
                    feed_dict={x: X, keep_prob: 1})

    return prob
In [294]:
import matplotlib.gridspec as gridspec
top_k = 5
top_k_probs, top_k_ids = get_top_k(imgs_Y, top_k)

# Print out the top softmax probabilities
print(top_k_probs)
print(top_k_ids)
[[  9.98923004e-01   1.07456511e-03   1.41373505e-06   6.98244889e-07
    3.02501149e-07]
 [  9.99980807e-01   1.83431039e-05   7.97612245e-07   1.10133271e-08
    1.56967361e-09]
 [  9.99988794e-01   9.27752171e-06   1.67838311e-06   1.82972656e-07
    4.92795920e-08]
 [  9.89342451e-01   8.64973012e-03   7.73247797e-04   6.94173388e-04
    2.97522638e-04]
 [  1.00000000e+00   3.30587380e-09   3.69466749e-11   2.58884429e-12
    9.32466181e-14]
 [  1.00000000e+00   7.28099003e-09   2.27076469e-10   1.95838804e-10
    9.88548202e-11]
 [  9.99999762e-01   2.07334693e-07   1.43557844e-12   9.33630343e-13
    3.26811847e-13]
 [  6.22595429e-01   2.50771880e-01   1.13870569e-01   1.27554834e-02
    2.58052660e-06]
 [  9.83157694e-01   1.08306399e-02   6.01103948e-03   2.51508226e-07
    2.20726704e-07]
 [  8.83393943e-01   8.19297805e-02   2.43381336e-02   9.81474295e-03
    3.09412979e-04]
 [  5.09579003e-01   4.79141593e-01   8.18180665e-03   6.63841784e-04
    6.36644778e-04]
 [  9.99999762e-01   1.32850687e-07   9.12264397e-08   3.69694249e-08
    3.49914657e-08]
 [  9.99989271e-01   7.18323508e-06   2.98954706e-06   3.16854539e-07
    1.01195852e-07]
 [  9.43781018e-01   5.17857745e-02   3.91405215e-03   2.75980681e-04
    1.49664353e-04]
 [  9.97333407e-01   2.66519329e-03   1.22626000e-06   2.85781084e-07
    4.23761870e-09]
 [  9.88516688e-01   1.08605213e-02   5.99213818e-04   1.51381846e-05
    4.21285586e-06]]
[[ 5  3 10  2  7]
 [34 38 15 13 35]
 [14  7 33 15  8]
 [ 5  1 14 15  7]
 [17  9 34 14 39]
 [ 3  5  2  0  9]
 [12 40  2 15 35]
 [25 28 29 24 30]
 [ 7  5  8  1 40]
 [35 28 24 36 12]
 [35 34 36 11 32]
 [13 12 35 15  2]
 [ 5  3  1  2  4]
 [ 3  5  2  1 12]
 [ 8  7  5  4 14]
 [ 5  7  3  8 31]]
In [295]:
# Plot the top softmax probabilities

idx = top_k + 1 - np.arange(top_k)

def plot_image(img, grid):
    plt.subplot(grid)
    plt.axis("off")
    plt.imshow(img)

def plot_top_k_preds(grid, i):
    ax = plt.subplot(grid)
    plt.barh(idx, top_k_probs[i], align='center')
    labels = class_names['SignName'][top_k_ids[i]].values
    plt.yticks(idx, labels)
    ax.yaxis.tick_right()


def draw_image(img, i):
    plt.figure(figsize = (10,2))
    gs = gridspec.GridSpec(1, 2)
    plot_image(img, gs[0])
    plot_top_k_preds(gs[1], i)
        
for i in range(total):
    draw_image(imgs[i], i)

Step 4: Visualize the Neural Network's State with Test Images

This section is not required to complete but acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.

Provided for you below is the function code that allows you to get the visualization output of any tensorflow weight layer you want. The inputs to the function should be a stimuli image, one used during training or a new one you provided, and then the tensorflow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer you could enter conv2 as the tf_activation variable.

For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.

Combined Image

Your output should look something like this (above)

In [188]:
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.

# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry

def outputFeatureMap(sess, image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
    # Here make sure to preprocess your image_input in a way your network expects
    # with size, normalization, etc. if needed
    # image_input =
    # Note: x should be the same name as your network's tensorflow data placeholder variable
    # If you get an error that tf_activation is not defined, it may be having trouble accessing the variable from inside a function
    activation = tf_activation.eval(session=sess,feed_dict={x : image_input, keep_prob:1.0})
    featuremaps = activation.shape[3]
    plt.figure(plt_num, figsize=(15,15))
    for featuremap in range(featuremaps):
        plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
        plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
        if activation_min != -1 and activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
        elif activation_max != -1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
        elif activation_min !=-1:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
        else:
            plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
In [258]:
test_input = np.asarray(imgs_Y[0]).reshape((1,32,32,1))
print(test_input.shape)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(sess, test_input, exposed['conv1'])
        
(1, 32, 32, 1)
In [259]:
test_input = np.asarray(imgs_Y[1]).reshape((1,32,32,1))
print(test_input.shape)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(sess, test_input, exposed['conv1'])
        
(1, 32, 32, 1)
In [261]:
test_input = np.asarray(imgs_Y[2]).reshape((1,32,32,1))
print(test_input.shape)
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    outputFeatureMap(sess, test_input, exposed['conv1'])
        
(1, 32, 32, 1)

Question 9

Discuss how you used the visual output of your trained network's feature maps to show that it had learned to look for interesting characteristics in traffic sign images

Answer:

The network looks for distinctive sign features in the first convolutional layer activations (conv1).

In the STOP sign activations, the shape of the traffic sign can easily be seen in the conv1 feature map. I also plotted the conv2 feature map, but the data there is more complex and doesn't seem to carry any easily recognizable visual meaning.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the IPython notebook as an HTML document. You can do this by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.